dgamma (Double Gamma) Distribution#
`scipy.stats.dgamma` is the double gamma distribution: a symmetric, continuous distribution on \(\mathbb{R}\) whose absolute value is Gamma-distributed.
A convenient generative story is:
Draw a magnitude \(Y \sim \mathrm{Gamma}(a, \text{scale}=1)\) on \([0,\infty)\).
Draw a sign \(S \in \{+1,-1\}\) with \(\mathbb{P}(S=+1)=\mathbb{P}(S=-1)=\tfrac12\).
Set \(X = S\,Y\).
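The story translates directly into code. A minimal sketch (it uses NumPy's built-in Gamma generator for brevity; Section 7 builds a Gamma sampler from scratch):

import numpy as np
rng_demo = np.random.default_rng(0)
a = 2.0
y = rng_demo.gamma(a, 1.0, size=100_000)       # magnitude Y ~ Gamma(a, scale=1)
s = rng_demo.choice([-1.0, 1.0], size=y.size)  # fair random sign S
x = s * y                                      # X = S * Y ~ dgamma(a)
print(x.mean(), x.var())                       # approx 0 and a*(a+1) = 6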
Learning goals#
Understand what `dgamma` models and how it relates to the Gamma and Laplace distributions.
Write down the PDF/CDF in clean LaTeX and connect them to incomplete gamma functions.
Derive the mean/variance and the likelihood for the shape parameter.
Implement NumPy-only sampling (Marsaglia–Tsang for Gamma + random sign).
Visualize the PDF, CDF, and Monte Carlo samples; then use `scipy.stats.dgamma` for `pdf`, `cdf`, `rvs`, and `fit`.
Notebook roadmap#
Title & classification
Intuition & motivation
Formal definition (PDF/CDF)
Moments & properties (MGF/CF/entropy)
Parameter interpretation (shape changes)
Derivations (mean/variance/likelihood)
Sampling & simulation (NumPy-only)
Visualization (PDF/CDF/samples)
SciPy integration (`scipy.stats.dgamma`)
Statistical use cases (testing/Bayes/generative)
Pitfalls
Summary
import math
import numpy as np
import plotly.express as px
import plotly.graph_objects as go
import os
import plotly.io as pio
import scipy
from scipy import special
from scipy.optimize import minimize_scalar
from scipy.stats import dgamma, kstest, laplace, norm
pio.templates.default = "plotly_white"
pio.renderers.default = os.environ.get("PLOTLY_RENDERER", "notebook")
SEED = 7
np.random.seed(SEED)
rng = np.random.default_rng(SEED)
np.set_printoptions(precision=4, suppress=True)
print("numpy", np.__version__)
print("scipy", scipy.__version__)
numpy 1.26.2
scipy 1.15.0
Prerequisites#
Comfort with basic probability (PDF/CDF, expectation, variance)
Familiarity with the Gamma function and the idea of regularized incomplete gamma functions (we’ll define what we need)
Basic numerical computing with NumPy
1) Title & Classification#
Distribution name: `dgamma` (double gamma)
Type: continuous
Support: \(x \in \mathbb{R}\)
Shape parameter: \(a > 0\)
SciPy uses the common location-scale convention:
$$f(x;\, a, \text{loc}, \text{scale}) = \frac{1}{\text{scale}}\, f\!\left(\frac{x - \text{loc}}{\text{scale}};\, a\right).$$
In this notebook we focus on the standard form (\(\text{loc}=0\), \(\text{scale}=1\)) unless otherwise stated.
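A quick numerical check of this convention, using the imports above (the particular values are arbitrary):

a_ls, loc_ls, scale_ls, x0 = 2.0, -0.5, 1.8, 1.3
print(dgamma.pdf(x0, a_ls, loc=loc_ls, scale=scale_ls))
print(dgamma.pdf((x0 - loc_ls) / scale_ls, a_ls) / scale_ls)  # same value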
2) Intuition & Motivation#
The double gamma distribution is best understood by splitting it into sign and magnitude:
The magnitude \(|X|\) follows a Gamma distribution.
The sign is a fair coin flip.
So dgamma is a natural model when:
You want a symmetric distribution around 0
With exponential tails (like Laplace), but with extra flexibility near \(0\)
Typical use cases#
Error/noise modeling when residuals are symmetric but not well captured by a Normal (heavier center and exponential tails).
Robust modeling: compared to Gaussian noise, exponential tails reduce the influence of large deviations.
Bayesian priors / regularization: `dgamma` generalizes the Laplace prior (the L1/”lasso” prior) and can make the prior either more concentrated at 0 (\(a<1\)) or even repel 0 (\(a>1\)).
Relations to other distributions#
If \(a=1\), `dgamma` becomes the Laplace distribution with scale 1: \(f(x; a=1) = \tfrac12 e^{-|x|}\).
For any \(a\), \(|X| \sim \mathrm{Gamma}(a, 1)\).
For \(a>1\), the distribution becomes bimodal with modes at \(\pm(a-1)\) (because the Gamma magnitude has its mode at \(a-1\)).
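These relations are easy to check numerically (a small sketch using the imports above):

xs_chk = np.linspace(-5.0, 5.0, 11)
print(np.allclose(dgamma.pdf(xs_chk, 1.0), laplace.pdf(xs_chk)))  # True: a=1 is Laplace(0,1)
x_chk = dgamma.rvs(2.5, size=200_000, random_state=np.random.default_rng(0))
print(np.abs(x_chk).mean())  # approx 2.5, since E|X| = E[Y] = a for Y ~ Gamma(a, 1)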
3) Formal Definition#
3.1 PDF#
For shape parameter \(a>0\), the standard dgamma PDF is
$$f(x; a) = \frac{|x|^{a-1} e^{-|x|}}{2\,\Gamma(a)}, \qquad x \in \mathbb{R}.$$
This is an even function (\(f(x)=f(-x)\)), so the distribution is symmetric around 0.
3.2 CDF#
Let \(P(a, z)\) denote the regularized lower incomplete gamma function
$$P(a, z) = \frac{1}{\Gamma(a)} \int_0^z t^{a-1} e^{-t}\, dt.$$
Then the dgamma CDF can be written compactly as
$$F(x; a) = \tfrac12\big(1 + \operatorname{sign}(x)\, P(a, |x|)\big),$$
with \(\operatorname{sign}(0)=0\) so \(F(0)=\tfrac12\).
Equivalently (piecewise):
$$F(x; a) = \begin{cases} \tfrac12\big(1 - P(a, -x)\big), & x < 0, \\ \tfrac12\big(1 + P(a, x)\big), & x \ge 0. \end{cases}$$
SciPy implements \(P(a,z)\) as scipy.special.gammainc(a, z).
def dgamma_logpdf_standard(x: np.ndarray, a: float) -> np.ndarray:
"""Log-PDF of standard dgamma(a) with loc=0, scale=1.
This is implemented explicitly (rather than calling SciPy) to make the
formula transparent and to highlight the behavior at x=0.
"""
if not (a > 0):
raise ValueError("a must be > 0")
x = np.asarray(x, dtype=float)
ax = np.abs(x)
out = np.empty_like(ax)
log_norm = -math.log(2.0) - special.gammaln(a)
pos = ax > 0
out[pos] = (a - 1.0) * np.log(ax[pos]) - ax[pos] + log_norm
# Handle x=0 explicitly to avoid the indeterminate 0 * log(0) when a=1.
zero = ~pos
if np.any(zero):
if a < 1:
out[zero] = np.inf
elif a == 1:
out[zero] = -math.log(2.0)
else:
out[zero] = -np.inf
return out
def dgamma_pdf_standard(x: np.ndarray, a: float) -> np.ndarray:
return np.exp(dgamma_logpdf_standard(x, a))
def dgamma_cdf_standard(x: np.ndarray, a: float) -> np.ndarray:
if not (a > 0):
raise ValueError("a must be > 0")
x = np.asarray(x, dtype=float)
P = special.gammainc(a, np.abs(x)) # regularized lower incomplete gamma
return 0.5 * (1.0 + np.sign(x) * P)
# Quick consistency check against SciPy
xs = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
a0 = 2.0
print("pdf max abs diff:", np.max(np.abs(dgamma_pdf_standard(xs, a0) - dgamma.pdf(xs, a0))))
print("cdf max abs diff:", np.max(np.abs(dgamma_cdf_standard(xs, a0) - dgamma.cdf(xs, a0))))
pdf max abs diff: 2.7755575615628914e-17
cdf max abs diff: 2.498001805406602e-16
4) Moments & Properties#
Because dgamma is symmetric, all odd moments are 0 (when they exist). Even moments match those of the Gamma magnitude.
4.1 Mean, variance, skewness, kurtosis#
Mean: \(\mathbb{E}[X]=0\).
Variance: \(\mathrm{Var}(X)=\mathbb{E}[X^2]=\frac{\Gamma(a+2)}{\Gamma(a)} = a(a+1)\).
Skewness: \(0\).
Kurtosis (non-excess): \(\kappa = \frac{\mathbb{E}[X^4]}{\mathrm{Var}(X)^2} = \frac{\Gamma(a+4)/\Gamma(a)}{\big(a(a+1)\big)^2} = \frac{(a+2)(a+3)}{a(a+1)}\). Excess kurtosis is \(\kappa - 3\).
More generally, for \(n\in\mathbb{N}\):
$$\mathbb{E}\big[X^{2n}\big] = \frac{\Gamma(a+2n)}{\Gamma(a)}, \qquad \mathbb{E}\big[X^{2n-1}\big] = 0.$$
4.2 MGF and characteristic function#
Using the sign/magnitude representation with \(Y\sim\mathrm{Gamma}(a,1)\) and \(S\in\{\pm1\}\):
$$M_X(t) = \mathbb{E}\big[e^{tSY}\big] = \tfrac12\big(M_Y(t) + M_Y(-t)\big) = \tfrac12\Big((1-t)^{-a} + (1+t)^{-a}\Big), \qquad |t| < 1.$$
The characteristic function is
$$\varphi_X(t) = \tfrac12\Big((1-it)^{-a} + (1+it)^{-a}\Big) = \big(1+t^2\big)^{-a/2} \cos\!\big(a \arctan t\big).$$
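We can spot-check this closed form against a Monte Carlo estimate of \(\mathbb{E}[\cos(tX)]\) (the imaginary part vanishes by symmetry); a small sketch:

a_cf, t_cf = 2.0, 0.7
x_cf = dgamma.rvs(a_cf, size=400_000, random_state=np.random.default_rng(1))
cf_mc = np.cos(t_cf * x_cf).mean()
cf_theory = (1.0 + t_cf**2) ** (-a_cf / 2.0) * np.cos(a_cf * np.arctan(t_cf))
print(cf_mc, cf_theory)  # both approx 0.23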
4.3 Differential entropy#
The positive and negative halves of the distribution live on essentially disjoint supports, so the entropy decomposes into
$$h(X) = \log 2 + h(Y),$$
where \(Y\sim\mathrm{Gamma}(a,1)\).
For \(\mathrm{Gamma}(a, \text{scale}=1)\), the differential entropy is
$$h(Y) = a + \log\Gamma(a) + (1-a)\,\psi(a),$$
with \(\psi\) the digamma function. Therefore
$$h(X) = \log 2 + a + \log\Gamma(a) + (1-a)\,\psi(a).$$
(All entropies here are in nats.)
def dgamma_even_moment(a: float, k: int) -> float:
"""E[X^k] for standard dgamma(a). Returns 0 for odd k."""
if k % 2 == 1:
return 0.0
return float(math.exp(special.gammaln(a + k) - special.gammaln(a)))
def dgamma_theory_summary(a: float) -> dict:
mean = 0.0
var = dgamma_even_moment(a, 2)
m4 = dgamma_even_moment(a, 4)
kurtosis = m4 / (var**2)
excess_kurtosis = kurtosis - 3.0
entropy_nats = math.log(2.0) + a + special.gammaln(a) + (1.0 - a) * special.digamma(a)
return {
"mean": mean,
"variance": var,
"skewness": 0.0,
"kurtosis": float(kurtosis),
"excess_kurtosis": float(excess_kurtosis),
"entropy_nats": float(entropy_nats),
}
a_demo = 0.7
dgamma_theory_summary(a_demo)
{'mean': 0.0,
'variance': 1.1900000000000002,
'skewness': 0.0,
'kurtosis': 8.394957983193281,
'excess_kurtosis': 5.394957983193281,
'entropy_nats': 1.2880073609822311}
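As a cross-check, SciPy's generic entropy method (which integrates the log-density numerically) should agree with the closed form; a sketch at \(a=2\), where the density is smooth at 0:

print(dgamma(2.0).entropy())                       # SciPy, numerical integration
print(dgamma_theory_summary(2.0)["entropy_nats"])  # closed form above; both approx 2.2704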
5) Parameter Interpretation (shape changes)#
dgamma has a single shape parameter \(a\) (plus optional loc and scale). The parameter \(a\) primarily controls the behavior near 0 and whether the distribution is unimodal vs bimodal.
Start from the log-density for \(x>0\):
$$\log f(x; a) = (a-1)\log x - x - \log\big(2\Gamma(a)\big).$$
Differentiate w.r.t. \(x\):
$$\frac{d}{dx} \log f(x; a) = \frac{a-1}{x} - 1.$$
So:
If \(0<a<1\), the term \(x^{a-1}\) diverges at \(0\) and the density has an infinite spike at 0.
If \(a=1\), the density is Laplace and is maximized at \(0\).
If \(a>1\), setting \(\frac{a-1}{x}-1=0\) gives a mode at \(x=a-1\) on the positive side, and by symmetry another at \(x=-(a-1)\).
The scale parameter in SciPy simply rescales the distribution: if \(X\sim\texttt{dgamma}(a,0,1)\), then \(\sigma X\sim\texttt{dgamma}(a,0,\sigma)\) and \(\mathrm{Var}(\sigma X)=\sigma^2 a(a+1)\).
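A quick numeric sanity check of the mode location for \(a>1\) (a sketch; the grid resolution is arbitrary):

a_chk = 3.0
grid = np.linspace(0.01, 10.0, 100_001)
print(grid[np.argmax(dgamma_pdf_standard(grid, a_chk))])  # approx a_chk - 1 = 2.0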
def plot_pdf_family(a_values: list[float], x_max: float = 8.0) -> go.Figure:
xs = np.linspace(-x_max, x_max, 1200)
fig = go.Figure()
for a in a_values:
fig.add_trace(
go.Scatter(
x=xs,
y=dgamma_pdf_standard(xs, a),
mode="lines",
name=f"a={a}",
)
)
fig.update_layout(
title="dgamma PDF for different shape parameters a",
xaxis_title="x",
yaxis_title="f(x)",
)
return fig
plot_pdf_family([0.4, 1.0, 2.0, 5.0], x_max=10.0).show()
6) Derivations#
6.1 Expectation#
Because \(f(x;a)\) is even and \(x\) is odd, the integrand \(x f(x;a)\) is odd. Therefore:
$$\mathbb{E}[X] = \int_{-\infty}^{\infty} x\, f(x; a)\, dx = 0.$$
This symmetry argument is often the quickest way to compute the mean.
6.2 Variance#
Since \(\mathbb{E}[X]=0\), we have \(\mathrm{Var}(X)=\mathbb{E}[X^2]\).
Using symmetry:
$$\mathbb{E}[X^2] = 2\int_0^\infty x^2\, \frac{x^{a-1} e^{-x}}{2\Gamma(a)}\, dx = \frac{1}{\Gamma(a)} \int_0^\infty x^{a+1} e^{-x}\, dx.$$
But
$$\int_0^\infty x^{a+1} e^{-x}\, dx = \Gamma(a+2),$$
so
$$\mathrm{Var}(X) = \mathbb{E}[X^2] = \frac{\Gamma(a+2)}{\Gamma(a)} = a(a+1).$$
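The same identity can be confirmed by numerical integration (a sketch using scipy.integrate.quad):

from scipy import integrate
a_var = 2.5
m2, _ = integrate.quad(lambda t: t**2 * dgamma.pdf(t, a_var), -np.inf, np.inf)
print(m2, a_var * (a_var + 1.0))  # both 8.75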
6.3 Likelihood (shape parameter)#
For i.i.d. samples \(x_1,\dots,x_n\) from the standard dgamma(a) (loc=0, scale=1), the log-likelihood is
$$\ell(a) = (a-1)\sum_{i=1}^n \log|x_i| - \sum_{i=1}^n |x_i| - n\log 2 - n\log\Gamma(a).$$
Differentiate w.r.t. \(a\):
$$\ell'(a) = \sum_{i=1}^n \log|x_i| - n\,\psi(a),$$
where \(\psi(a) = \frac{d}{da}\log\Gamma(a)\) is the digamma function.
Setting the score to zero gives the MLE condition:
$$\psi(\hat a) = \frac{1}{n} \sum_{i=1}^n \log|x_i|.$$
There is no closed form for \(\hat a\), but we can solve it with Newton’s method using the trigamma function \(\psi_1(a)\).
def fit_a_mle_standard(x: np.ndarray, *, a0: float | None = None, max_iter: int = 100, tol: float = 1e-12) -> float:
"""MLE for shape a in standard dgamma(a) assuming loc=0, scale=1.
Uses the score equation: digamma(a) = mean(log |x|).
"""
x = np.asarray(x, dtype=float)
ax = np.abs(x)
if np.any(ax == 0):
raise ValueError("Found exact zeros; log|x| is -inf. Add jitter or model rounding explicitly.")
target = float(np.mean(np.log(ax)))
# For large a: digamma(a) ~ log(a - 1/2). So a ≈ exp(target) + 1/2.
a = float(a0 if a0 is not None else max(1e-6, math.exp(target) + 0.5))
for _ in range(max_iter):
f = float(special.digamma(a) - target)
fp = float(special.polygamma(1, a)) # trigamma
step = f / fp
a_new = a - step
if a_new <= 0:
a_new = a / 2.0
if abs(a_new - a) < tol * max(1.0, abs(a)):
return float(a_new)
a = a_new
return float(a)
# Quick sanity check: generate data from SciPy and estimate a
a_true = 2.5
x_synth = dgamma.rvs(a_true, size=50_000, random_state=rng)
a_hat = fit_a_mle_standard(x_synth)
a_true, a_hat
(2.5, 2.4973861961360124)
7) Sampling & Simulation (NumPy-only)#
We want a sampler that uses only NumPy.
Recall the generative story:
Sample \(Y \sim \mathrm{Gamma}(a,1)\).
Sample \(S \in \{\pm 1\}\) uniformly.
Return \(X = S\,Y\).
So the core problem is sampling from a Gamma distribution.
Marsaglia–Tsang (2000) for Gamma(a,1)#
For \(a \ge 1\), Marsaglia–Tsang provides an efficient rejection sampler:
Set \(d = a - 1/3\) and \(c = 1/\sqrt{9d}\).
Repeat:
draw \(Z \sim \mathcal{N}(0,1)\) and set \(V = (1 + cZ)^3\).
draw \(U \sim \mathrm{Uniform}(0,1)\).
accept if \(V>0\) and \(\log U < \tfrac12 Z^2 + d - dV + d\log V\).
Return \(dV\).
For \(0<a<1\), use the standard boost trick: if \(Y \sim \mathrm{Gamma}(a+1, 1)\) and \(U \sim \mathrm{Uniform}(0,1)\) are independent, then \(Y\, U^{1/a} \sim \mathrm{Gamma}(a, 1)\).
def gamma_rvs_mt(shape: float, size: int, *, rng: np.random.Generator) -> np.ndarray:
"""Sample Gamma(shape, scale=1) using NumPy only (Marsaglia–Tsang).
References
- Marsaglia, G., & Tsang, W. W. (2000). A Simple Method for Generating Gamma Variables.
"""
if not (shape > 0):
raise ValueError("shape must be > 0")
if size < 0:
raise ValueError("size must be >= 0")
if size == 0:
return np.array([], dtype=float)
if shape < 1.0:
# Boost: Gamma(a) = Gamma(a+1) * U^{1/a}
y = gamma_rvs_mt(shape + 1.0, size, rng=rng)
u = rng.random(size)
return y * (u ** (1.0 / shape))
d = shape - 1.0 / 3.0
c = 1.0 / math.sqrt(9.0 * d)
out = np.empty(size, dtype=float)
filled = 0
while filled < size:
m = size - filled
z = rng.standard_normal(m)
v = (1.0 + c * z) ** 3
u = rng.random(m)
        # Guard np.log(v) when v <= 0: those proposals are rejected anyway,
        # so we silence the harmless invalid-value RuntimeWarning.
        with np.errstate(invalid="ignore"):
            accept = (v > 0) & (np.log(u) < 0.5 * z * z + d - d * v + d * np.log(v))
n_acc = int(np.sum(accept))
if n_acc:
out[filled : filled + n_acc] = d * v[accept]
filled += n_acc
return out
def dgamma_rvs_numpy(a: float, size: int, *, rng: np.random.Generator) -> np.ndarray:
"""Sample standard dgamma(a) using NumPy only."""
y = gamma_rvs_mt(a, size, rng=rng)
s = np.where(rng.random(size) < 0.5, -1.0, 1.0)
return s * y
# Monte Carlo check: sample moments vs theory
a_mc = 2.0
x_mc = dgamma_rvs_numpy(a_mc, 200_000, rng=rng)
print("sample mean:", float(np.mean(x_mc)))
print("sample var :", float(np.var(x_mc)))
print("theory var :", dgamma_theory_summary(a_mc)["variance"])
sample mean: -0.0004996194824755228
sample var : 5.994535773412378
theory var : 6.0
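As a distribution-level check, we can run a KS test of the NumPy-only samples against SciPy's CDF; because the shape is known (not fitted), the p-value is calibrated. A sketch:

ks_check = kstest(x_mc, lambda t: dgamma.cdf(t, a_mc))
print(ks_check.statistic, ks_check.pvalue)  # expect a small statistic and a non-tiny p-value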
8) Visualization#
We’ll visualize:
The PDF for different \(a\)
The CDF
A Monte Carlo histogram vs the theoretical PDF
An empirical CDF vs the theoretical CDF
def plot_cdf_family(a_values: list[float], x_max: float = 8.0) -> go.Figure:
xs = np.linspace(-x_max, x_max, 1200)
fig = go.Figure()
for a in a_values:
fig.add_trace(
go.Scatter(
x=xs,
y=dgamma_cdf_standard(xs, a),
mode="lines",
name=f"a={a}",
)
)
fig.update_layout(
title="dgamma CDF for different shape parameters a",
xaxis_title="x",
yaxis_title="F(x)",
)
return fig
plot_cdf_family([0.4, 1.0, 2.0, 5.0], x_max=10.0).show()
a_vis = 0.7
n_vis = 80_000
x_vis = dgamma_rvs_numpy(a_vis, n_vis, rng=rng)
x_max = np.quantile(np.abs(x_vis), 0.995)
xs = np.linspace(-x_max, x_max, 900)
fig = px.histogram(
x=x_vis,
nbins=120,
histnorm="probability density",
title=f"Monte Carlo histogram vs theoretical PDF (a={a_vis})",
)
fig.add_trace(go.Scatter(x=xs, y=dgamma_pdf_standard(xs, a_vis), mode="lines", name="theory pdf"))
fig.update_layout(xaxis_title="x", yaxis_title="density")
fig.show()
def empirical_cdf(samples: np.ndarray):
xs = np.sort(samples)
ys = np.arange(1, xs.size + 1) / xs.size
return xs, ys
x_ecdf, y_ecdf = empirical_cdf(x_vis)
grid = np.linspace(-x_max, x_max, 800)
fig = go.Figure()
fig.add_trace(go.Scatter(x=x_ecdf[::50], y=y_ecdf[::50], mode="markers", name="empirical CDF"))
fig.add_trace(go.Scatter(x=grid, y=dgamma_cdf_standard(grid, a_vis), mode="lines", name="theory CDF"))
fig.update_layout(title=f"Empirical CDF vs theoretical CDF (a={a_vis})", xaxis_title="x", yaxis_title="F(x)")
fig.show()
9) SciPy Integration (scipy.stats.dgamma)#
SciPy provides a full `rv_continuous` implementation:
`dgamma.pdf(x, a, loc=0, scale=1)`
`dgamma.cdf(x, a, loc=0, scale=1)`
`dgamma.rvs(a, loc=0, scale=1, size=..., random_state=...)`
`dgamma.fit(data)` (maximum likelihood for `a`, `loc`, `scale`)
Below we show how to use these and compare to our NumPy-only sampler.
a_true = 2.0
loc_true = -0.5
scale_true = 1.8
x_scipy = dgamma.rvs(a_true, loc=loc_true, scale=scale_true, size=60_000, random_state=rng)
# Fit all parameters
a_fit, loc_fit, scale_fit = dgamma.fit(x_scipy)
print("true (a, loc, scale):", (a_true, loc_true, scale_true))
print("fit (a, loc, scale):", (float(a_fit), float(loc_fit), float(scale_fit)))
# If you know loc/scale, you can fix them and estimate only a
a_fit_fixed, loc_fixed, scale_fixed = dgamma.fit(x_scipy, floc=loc_true, fscale=scale_true)
print("fit with fixed loc/scale:", (float(a_fit_fixed), float(loc_fixed), float(scale_fixed)))
true (a, loc, scale): (2.0, -0.5, 1.8)
fit (a, loc, scale): (1.9895099177905788, -0.4892242478889066, 1.8093153852790433)
fit with fixed loc/scale: (1.997460937500002, -0.5, 1.8)
10) Statistical Use Cases#
10.1 Hypothesis testing / goodness-of-fit#
If you have a proposed model (e.g. dgamma vs Laplace vs Normal), common workflows include:
Goodness-of-fit tests such as Kolmogorov–Smirnov (KS) when parameters are known.
Model comparison via likelihood / AIC / BIC when parameters are estimated.
Caution: if you estimate parameters from the same data you test with, the KS p-values are not calibrated (this is the same issue as the Lilliefors correction for Normality tests). AIC/BIC comparisons are often a better quick diagnostic.
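If you must estimate parameters first, one standard remedy is a parametric bootstrap: simulate from the fitted model, refit on each replicate, and compare the observed KS statistic with its bootstrap distribution. A minimal sketch (the helper name is ours; B is kept small for speed):

def ks_pvalue_parametric_bootstrap(x: np.ndarray, B: int = 200, seed: int = 0) -> float:
    """Bootstrap-calibrated KS p-value for a fitted dgamma model (sketch)."""
    rng_b = np.random.default_rng(seed)
    a, loc, scale = dgamma.fit(x)
    d_obs = kstest(x, lambda t: dgamma.cdf(t, a, loc=loc, scale=scale)).statistic
    d_boot = np.empty(B)
    for b in range(B):
        xb = dgamma.rvs(a, loc=loc, scale=scale, size=x.size, random_state=rng_b)
        ab, lb, sb = dgamma.fit(xb)  # refit on each replicate
        d_boot[b] = kstest(xb, lambda t: dgamma.cdf(t, ab, loc=lb, scale=sb)).statistic
    # Add-one smoothing keeps the estimated p-value away from exactly 0.
    return float((1 + np.sum(d_boot >= d_obs)) / (B + 1))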
10.2 Bayesian modeling#
As a prior for a coefficient \(\beta\) (centered at 0), dgamma includes the Laplace prior as the special case \(a=1\).
The log-density (standard form) is
$$\log f(\beta; a) = (a-1)\log|\beta| - |\beta| - \log\big(2\Gamma(a)\big).$$
So the negative log-prior is
$$-\log f(\beta; a) = |\beta| - (a-1)\log|\beta| + \text{const}.$$
At \(a=1\), this reduces to an L1 penalty (\(|\beta|\)).
For \(a<1\), the prior becomes more spiky at 0 (stronger shrinkage/sparsity).
For \(a>1\), the density is 0 at 0, which can act like a repulsive prior around 0.
10.3 Generative modeling#
Because sampling is easy (Gamma magnitude + random sign), dgamma can be used as a plug-in noise source for simulations where you want symmetric, exponential-tailed noise but more control over the central shape than Laplace.
# 10.1: AIC comparison on synthetic data
n = 20_000
a_true = 1.3
x = dgamma.rvs(a_true, size=n, random_state=rng)
# Fit candidate models
a_dg, loc_dg, scale_dg = dgamma.fit(x)
loc_lap, scale_lap = laplace.fit(x)
mu_n, sd_n = norm.fit(x)
ll_dg = float(np.sum(dgamma.logpdf(x, a_dg, loc=loc_dg, scale=scale_dg)))
ll_lap = float(np.sum(laplace.logpdf(x, loc=loc_lap, scale=scale_lap)))
ll_n = float(np.sum(norm.logpdf(x, loc=mu_n, scale=sd_n)))
# Parameter counts: dgamma has (a, loc, scale)=3; Laplace has (loc, scale)=2; Normal has (mu, sigma)=2
aic_dg = 2 * 3 - 2 * ll_dg
aic_lap = 2 * 2 - 2 * ll_lap
aic_n = 2 * 2 - 2 * ll_n
print("AIC (lower is better)")
print(" dgamma:", aic_dg)
print(" laplace:", aic_lap)
print(" normal:", aic_n)
AIC (lower is better)
dgamma: 77359.24609802279
laplace: 78144.37384621782
normal: 78673.30533222358
# 10.1: KS test with known parameters (calibrated because we don't fit)
a_known = 1.3
x = dgamma.rvs(a_known, size=5_000, random_state=rng)
ks_dg = kstest(x, lambda t: dgamma.cdf(t, a_known))
ks_lap = kstest(x, laplace.cdf) # Laplace(0,1)
ks_n = kstest(x, norm.cdf) # Normal(0,1)
print("KS dgamma(known a):", ks_dg)
print("KS Laplace(0,1): ", ks_lap)
print("KS Normal(0,1): ", ks_n)
KS dgamma(known a): KstestResult(statistic=0.006236167694384731, pvalue=0.9893872415885169, statistic_location=0.8300866939944515, statistic_sign=1)
KS Laplace(0,1): KstestResult(statistic=0.06931275631257258, pvalue=2.482564548958877e-21, statistic_location=-0.9514659718253009, statistic_sign=1)
KS Normal(0,1): KstestResult(statistic=0.09992138634150076, pvalue=6.546842977802951e-44, statistic_location=-1.532669826254665, statistic_sign=1)
# 10.2: Simple 1D MAP estimate under a dgamma prior
# Likelihood: y | theta ~ Normal(theta, sigma^2)
# Prior: theta ~ dgamma(a)
def neg_log_posterior(theta: float, y: float, sigma: float, a: float) -> float:
ll = 0.5 * ((y - theta) / sigma) ** 2 + math.log(sigma * math.sqrt(2.0 * math.pi))
lp = -float(dgamma_logpdf_standard(theta, a))
return ll + lp
y = 0.7
sigma = 0.4
a_values = [0.6, 1.0, 2.5]
thetas = np.linspace(-2.5, 2.5, 1200)
fig = go.Figure()
for a in a_values:
vals = np.array([neg_log_posterior(t, y=y, sigma=sigma, a=a) for t in thetas])
fig.add_trace(go.Scatter(x=thetas, y=vals - vals.min(), mode="lines", name=f"a={a}"))
fig.update_layout(
title="1D MAP objective (shifted): Normal likelihood + dgamma prior",
xaxis_title="theta",
yaxis_title="negative log-posterior (shifted)",
)
fig.show()
for a in a_values:
res = minimize_scalar(neg_log_posterior, bounds=(-3, 3), method="bounded", args=(y, sigma, a))
print(f"a={a}: MAP theta={res.x:.4f}")
a=0.6: MAP theta=0.3643
a=1.0: MAP theta=0.5400
a=2.5: MAP theta=0.8294
# 10.3: Generative modeling example: symmetric noise with tunable central shape
n = 12_000
a_gen = 0.5
noise = dgamma_rvs_numpy(a_gen, n, rng=rng)
x_max = np.quantile(np.abs(noise), 0.995)
xs = np.linspace(-x_max, x_max, 700)
fig = px.histogram(
x=noise,
nbins=120,
histnorm="probability density",
title=f"Noise samples from dgamma(a={a_gen}) (NumPy-only sampler)",
)
fig.add_trace(go.Scatter(x=xs, y=dgamma_pdf_standard(xs, a_gen), mode="lines", name="theory pdf"))
fig.update_layout(xaxis_title="noise", yaxis_title="density")
fig.show()
11) Pitfalls#
Invalid parameters: the shape must satisfy \(a>0\); SciPy will error (or return `nan`) otherwise.
Behavior at 0:
for \(a<1\), the PDF diverges at 0 (infinite density). That’s fine mathematically, but it can surprise you numerically.
for \(a=1\), the PDF is finite at 0 (Laplace).
for \(a>1\), the PDF is 0 at 0.
Use `logpdf` for stability: for large \(|x|\), `pdf` underflows quickly; `logpdf` is typically stable.
MGF domain: \(M_X(t)\) exists only for \(|t|<1\) (the tails decay like \(e^{-|x|}\)).
Fitting caveats:
If your data contain exact zeros (rounding/quantization), \(\log|x|\) becomes \(-\infty\) and the simple MLE derivation breaks.
When fitting `loc` and `scale` as well, likelihood surfaces can be relatively flat; use diagnostics and reasonable constraints.
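A tiny illustration of the underflow pitfall (sketch):

print(dgamma.pdf(800.0, 2.0))     # 0.0: the density underflows in float64
print(dgamma.logpdf(800.0, 2.0))  # finite, approx -794.0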
12) Summary#
`dgamma` is a continuous, symmetric distribution on \(\mathbb{R}\) with PDF \(\propto |x|^{a-1}e^{-|x|}\).
It can be generated as sign × Gamma magnitude: \(X=S\,Y\) with \(Y\sim\mathrm{Gamma}(a,1)\).
Special case: \(a=1\) gives the Laplace distribution.
Mean is 0 and variance is \(a(a+1)\) (scaled by \(\text{scale}^2\) under SciPy's parameterization).
Sampling is straightforward once you can sample Gamma; Marsaglia–Tsang gives an efficient NumPy-only implementation.
In practice, prefer `scipy.stats.dgamma` for production work (`pdf`, `cdf`, `rvs`, `fit`) and use `logpdf` for numerical stability.